End-to-end autonomous driving model based on deep visual attention neural network
HU Xuemin, TONG Xiuchi, GUO Lin, ZHANG Ruohan, KONG Li
Journal of Computer Applications 2020, 40(7): 1926-1931. DOI: 10.11772/j.issn.1001-9081.2019112054
To address the low accuracy of driving command prediction, the bulky model structure, and the large amount of information redundancy in existing end-to-end autonomous driving methods, a new end-to-end autonomous driving model based on a deep visual attention neural network was proposed. To extract features of autonomous driving scenes effectively, a deep visual attention neural network composed of a convolutional neural network, a visual attention layer, and a long short-term memory network was built by introducing a visual attention mechanism into the end-to-end autonomous driving model. The proposed model was able to extract the spatial and temporal features of driving scene images effectively, focus on important information, and reduce information redundancy, thereby realizing end-to-end autonomous driving that predicts driving commands from sequential images input from a front-facing camera. Data from a simulated driving environment were used for training and testing. The root mean square errors of the proposed model in predicting the steering angle in four scenes (country road, highway, tunnel, and mountain road) are 0.00914, 0.00948, 0.00289, and 0.01078 respectively, all lower than the results of the method proposed by NVIDIA and of the method based on the deep cascaded neural network. Moreover, the proposed model has fewer network layers than the corresponding networks without the visual attention mechanism.
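The abstract describes the architecture only at a high level. As a rough illustration, a CNN-to-visual-attention-to-LSTM steering predictor of the kind described could be sketched in PyTorch as follows; all layer sizes, the soft spatial attention form, and the single steering output are illustrative assumptions rather than the authors' exact design.

    import torch
    import torch.nn as nn

    class VisualAttentionDriver(nn.Module):
        """Sketch: CNN backbone + spatial attention + LSTM over frames."""
        def __init__(self, hidden=64):
            super().__init__()
            # CNN backbone: spatial feature maps per frame (sizes assumed)
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            )
            # Visual attention: one score per spatial location, softmax-normalized,
            # letting the model focus on important regions and drop redundancy
            self.attn = nn.Conv2d(64, 1, 1)
            # LSTM aggregates the per-frame feature vectors over time
            self.lstm = nn.LSTM(64, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)           # steering angle

        def forward(self, x):                          # x: (batch, time, 3, H, W)
            b, t = x.shape[:2]
            f = self.cnn(x.flatten(0, 1))              # (b*t, 64, h, w)
            w = torch.softmax(self.attn(f).flatten(2), dim=-1)   # (b*t, 1, h*w)
            v = (f.flatten(2) * w).sum(-1)             # attention-weighted pooling
            out, _ = self.lstm(v.view(b, t, -1))
            return self.head(out[:, -1])               # predict from last step

Trained with a mean-squared-error loss against recorded steering angles, the square root of the held-out test loss would correspond to the RMSE figures reported above.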
Motion planning for autonomous driving with directional navigation based on deep spatio-temporal Q-network
HU Xuemin, CHENG Yu, CHEN Guowen, ZHANG Ruohan, TONG Xiuchi
Journal of Computer Applications 2020, 40(7): 1919-1925. DOI: 10.11772/j.issn.1001-9081.2019101798
To solve the problems that machine-learning-based motion planning for autonomous driving requires a large number of samples, does not associate temporal information, and does not use global navigation information, a motion planning method for autonomous driving with directional navigation based on a deep spatio-temporal Q-network was proposed. Firstly, to extract the spatial features in images and the temporal information between continuous frames, a new deep spatio-temporal Q-network was proposed by combining the original deep Q-network with a long short-term memory network. Then, to make full use of the global navigation information of autonomous driving, directional navigation was realized by adding a guide signal into the images used for extracting environment information. Finally, based on the proposed deep spatio-temporal Q-network, a learning-strategy-oriented motion planning model for autonomous driving was designed to achieve end-to-end motion planning, in which the steering wheel angle, accelerator, and brake were predicted from the input sequential images. Training and testing results in the CARLA driving simulator show that on the four test roads, the average deviation of the proposed algorithm is less than 0.7 m and its stability is better than that of the four comparison algorithms. These results demonstrate that the proposed method has better learning, stability, and real-time performance in realizing motion planning for autonomous driving with a global navigation route.
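A minimal sketch of such a deep spatio-temporal Q-network is given below, under two assumed details that the abstract leaves open: the three controls (steering wheel, accelerator, brake) are discretized into a small action set, and the directional guide signal is encoded as a fourth image channel. The paper's exact encoding and discretization may differ.

    import torch
    import torch.nn as nn

    class SpatioTemporalQNet(nn.Module):
        """Sketch: per-frame CNN + LSTM + Q-value head (details assumed)."""
        def __init__(self, n_actions=9, hidden=128):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(4, 32, 8, stride=4), nn.ReLU(),   # RGB + guide channel
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 64)
            )
            self.lstm = nn.LSTM(64, hidden, batch_first=True)
            self.q = nn.Linear(hidden, n_actions)  # one Q-value per control bin

        def forward(self, frames, state=None):     # frames: (batch, time, 4, H, W)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
            out, state = self.lstm(feats, state)
            return self.q(out[:, -1]), state       # act greedily on last step

Training would follow the usual DQN recipe (replay buffer, target network, epsilon-greedy exploration); only the LSTM over frame features distinguishes the network above from a standard DQN.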
Motion planning algorithm of robot for crowd evacuation based on deep Q-network
ZHOU Wan, HU Xuemin, SHI Chenyin, WEI Jieling, TONG Xiuchi
Journal of Computer Applications 2019, 39(10): 2876-2882. DOI: 10.11772/j.issn.1001-9081.2019030507
Aiming at the danger and unsatisfactory results of evacuating dense crowds from public places in emergencies, a motion planning algorithm for crowd-evacuation robots based on Deep Q-Network (DQN) was proposed. Firstly, a human-robot social force model was constructed by adding human-robot interaction to the original social force model, so that the motion state of the crowd could be influenced by the force the robot exerts on pedestrians. Then, a robot motion planning algorithm was designed based on DQN: images of the original pedestrian motion state were input into the network and the robot motion behavior was output, while the designed reward function was fed back to the network so that the robot could learn autonomously from the closed loop of "environment-behavior-reward". Finally, after many iterations, the robot was able to learn the optimal motion strategies at different initial positions that maximize the total number of people evacuated. The proposed algorithm was trained and evaluated in a simulated environment. Experimental results show that, compared with crowd evacuation without a robot, the DQN-based algorithm increases the evacuation efficiency by 16.41%, 10.69%, and 21.76% at three different initial positions respectively, which proves that the algorithm can significantly increase the number of people evacuated per unit time and is flexible and effective.
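The human-robot social force model can be illustrated with a small numerical sketch: the usual social-force update for each pedestrian, plus one extra repulsive term exerted by the robot, so that the robot's position (chosen by the DQN agent) steers the crowd. The exponential force form and all constants below are illustrative assumptions, not the paper's calibrated model.

    import numpy as np

    A, B, TAU = 2.0, 0.5, 0.5   # strength, range, relaxation time (assumed)

    def social_force_step(pos, vel, goal_dir, v0, robot_pos, dt=0.05):
        """pos, vel, goal_dir: (N, 2) pedestrian states; robot_pos: (2,)."""
        force = (v0 * goal_dir - vel) / TAU      # drive toward desired velocity
        for i in range(len(pos)):
            for j in range(len(pos)):            # pedestrian-pedestrian repulsion
                if i == j:
                    continue
                d = pos[i] - pos[j]
                r = np.linalg.norm(d) + 1e-9
                force[i] += A * np.exp(-r / B) * d / r
            # human-robot interaction: the robot repels nearby pedestrians,
            # which is what lets its motion influence the crowd state
            d = pos[i] - robot_pos
            r = np.linalg.norm(d) + 1e-9
            force[i] += A * np.exp(-r / B) * d / r
        vel = vel + force * dt
        pos = pos + vel * dt
        return pos, vel

The DQN agent then observes a rendering of the pedestrian state, picks a robot motion, and receives the evacuation-based reward, closing the "environment-behavior-reward" loop described above.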
Motion planning model based on deep cascaded neural network for autonomous driving
BAI Liyun, HU Xuemin, SONG Sheng, TONG Xiuchi, ZHANG Ruohan
Journal of Computer Applications 2019, 39(10): 2870-2875. DOI: 10.11772/j.issn.1001-9081.2019040629
To address the problems that rule-based motion planning algorithms under constraints need predefined rules and that deep-learning-based methods do not consider temporal features, a motion planning model based on a deep cascaded neural network was proposed. In this model, two classical deep learning models, the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network, were combined to build a novel cascaded neural network: the spatial and temporal features of the input images were extracted respectively, and the nonlinear relationship between the input sequential images and the output motion parameters was fitted, achieving end-to-end planning from input sequential images to output motion parameters. In experiments, data from a simulated environment were used for training and testing. Results show that the Root Mean Squared Error (RMSE) of the proposed model in four scenes (country road, freeway, tunnel, and mountain road) is less than 0.017, and that the stability of its predictions is better by an order of magnitude than that of the algorithm without the cascaded neural network. Experimental results show that the proposed model can effectively learn human driving behaviors, eliminate the effect of cumulative errors, and adapt to a variety of road conditions in different scenes with good robustness.
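Because the model is trained end-to-end on recorded driving data, the fitting step reduces to regression with a mean-squared-error loss over sliding windows of consecutive frames. A hedged PyTorch sketch follows; the dataset layout, window length, and optimizer settings are assumptions for illustration.

    import torch
    import torch.nn as nn

    def train(model, loader, epochs=10, lr=1e-4):
        """model: any CNN-LSTM planner mapping (b, t, 3, H, W) -> (b, k)."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for frames, target in loader:   # target: (b, k) motion parameters
                pred = model(frames)
                loss = mse(pred, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model   # RMSE on held-out scenes is the square root of this loss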